Microsoft Has Released Counterfit, an Open-Source Tool for Preventing AI Hacking
Microsoft has released an open-source tool to help developers assess the security of their machine learning systems.
The Counterfit project, now available on GitHub, comprises a command-line interface and a generic automation layer that let developers simulate cyberattacks against AI systems.
Microsoft’s red team has used Counterfit to test its own AI models, and the wider company is also exploring using the tool in AI development.
Anyone can download the tool and deploy it via Azure Shell, to run it in-browser, or locally in an Anaconda Python environment.
It can assess AI models hosted in any cloud environment, on-premises, or on the edge. Microsoft also highlighted the tool’s flexibility, noting that it is agnostic to AI models and supports a variety of data types, including text, images, and generic input.
“Our tool makes published attack algorithms accessible to the security community and helps to provide an extensible interface from which to build, develop, and launch attacks on AI models,” Microsoft explained.
“This tool is part of broader efforts at Microsoft to empower engineers to securely develop and deploy AI systems.”
The three key ways security professionals can use Counterfit are penetration testing and red teaming AI systems, scanning AI systems for vulnerabilities, and logging attacks against AI models.
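To give a concrete sense of the kind of evasion attack such tools automate, here is a minimal sketch of a fast-gradient-sign-style perturbation against a toy logistic-regression classifier. This is illustrative only, not Counterfit code: the model, weights, input, and step size are all invented for the example.

```python
import numpy as np

# Toy logistic-regression "model": sigmoid(w.x + b), threshold at 0.5.
# Weights and bias are arbitrary values chosen for illustration.
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict(x):
    """Return the model's positive-class score for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input the model classifies as positive.
x = np.array([1.0, 0.2, 0.3])
clean_score = predict(x)          # above the 0.5 decision threshold

# FGSM-style evasion: perturb the input against the gradient of the
# score with respect to x. For a linear model that gradient direction
# is simply sign(w), so each feature is nudged by eps in the direction
# that lowers the score most.
eps = 1.2
x_adv = x - eps * np.sign(w)
adv_score = predict(x_adv)        # now below the 0.5 threshold

print(f"clean score: {clean_score:.3f}, adversarial score: {adv_score:.3f}")
```

A small per-feature perturbation is enough to flip the classification, which is exactly the class of weakness that red-team tooling of this kind probes for at scale, across many published attack algorithms and model types.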